Cross-lingual zero-resource named entity recognition model based on sentence-level generative adversarial network
Xiaoyan ZHANG, Zhengyu DUAN
Journal of Computer Applications    2023, 43 (8): 2406-2411.   DOI: 10.11772/j.issn.1001-9081.2022071124

To address the lack of labeled data in low-resource languages, which prevents mature deep learning methods from being applied to Named Entity Recognition (NER), a cross-lingual NER model based on a sentence-level Generative Adversarial Network (GAN), namely SLGAN-XLM-R (Sentence-Level GAN based on XLM-R), was proposed. Firstly, the labeled data of the source language was used to train the NER model on the basis of the pre-trained model XLM-R (XLM-RoBERTa); at the same time, linguistic adversarial training was performed on the embedding layer of the XLM-R model using the unlabeled data of the target language. Then, soft labels for the unlabeled data of the target language were predicted with this NER model. Finally, the labeled data of the source and target languages was mixed to fine-tune the model again, yielding the final NER model. Experiments were conducted on four languages (English, German, Spanish, and Dutch) in the CoNLL2002 and CoNLL2003 datasets. The results show that with English as the source language, the F1 scores of the SLGAN-XLM-R model on the test sets of German, Spanish, and Dutch are 72.70%, 79.42%, and 80.03% respectively, which are 5.38, 5.38, and 3.05 percentage points higher than those obtained by directly fine-tuning the XLM-R model.
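The self-training step described above (predict soft labels on unlabeled target-language sentences, then mix confident predictions with labeled source data for a second fine-tune) can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the stand-in `model`, the confidence `threshold`, and the sentence-level filtering rule are all assumptions.

```python
def soft_labels(model, sentence):
    """Per-token label distributions from a (stand-in) tagger."""
    return [model(tok) for tok in sentence]

def mix_training_data(source_data, target_sentences, model, threshold=0.9):
    """Mix labeled source data with confidently self-labeled target sentences."""
    mixed = list(source_data)
    for sent in target_sentences:
        dists = soft_labels(model, sent)
        # keep the sentence only if every token is labeled confidently
        if all(max(d.values()) >= threshold for d in dists):
            labels = [max(d, key=d.get) for d in dists]
            mixed.append((sent, labels))
    return mixed
```

In the real system the tagger is the XLM-R-based NER model and the mixed set drives a second round of fine-tuning.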

Table and Figures | Reference | Related Articles | Metrics
Survey on combination of computation offloading and blockchain in internet of things
Rui MEN, Shujia FAN, Axida SHAN, Shaoyu DU, Xiumei FAN
Journal of Computer Applications    2023, 43 (10): 3008-3016.   DOI: 10.11772/j.issn.1001-9081.2022091466

With the recent development of mobile communication technology and the popularization of smart devices, the computation-intensive tasks of terminal devices can be offloaded to edge servers to solve the problem of insufficient resources. However, the distributed nature of computation offloading exposes terminal devices and edge servers to security risks, while blockchain technology can provide a secure transaction environment for the computation offloading system. Combining the two technologies can therefore address both the resource shortage and the security problems in the internet of things. Accordingly, the research results of applications combining computation offloading with blockchain technologies in the internet of things were surveyed. Firstly, the application scenarios and system functions of the combination of computation offloading and blockchain technologies were analyzed. Then, the main problems solved by blockchain technology in computation offloading systems, as well as the key techniques it employs, were summarized, and the formulation methods, optimization objectives and optimization algorithms of computation offloading strategies in blockchain systems were classified. Finally, open problems in combining the two technologies were discussed, and future directions of development in this area were prospected.

Table and Figures | Reference | Related Articles | Metrics
Non-intrusive load identification algorithm based on convolutional neural network with upsampling pyramid structure
Yu DU, Meng YAN, Xin WU
Journal of Computer Applications    2022, 42 (10): 3300-3306.   DOI: 10.11772/j.issn.1001-9081.2021081512

Non-Intrusive Load Monitoring (NILM) technology provides technical support for demand-side management, and non-intrusive load identification is the key link in the load monitoring process. Long-term load data cannot be sampled in real time at high frequency, so the temporal information of the collected load data is incomplete; at the same time, Convolutional Neural Networks (CNNs) suffer from insufficient representation of low-level signal features. In view of these two problems, a CNN-based non-intrusive load identification algorithm with an upsampling pyramid structure was proposed. In the proposed algorithm, the collected load current signals were processed directly: the temporal information of the data was compensated by the time-dimension correlations of the data expanded by an upsampling network, and the high-level and low-level features of the load signals were extracted by a bidirectional pyramid one-dimensional convolution, so that the load characteristics were fully utilized and unknown load signals could be identified. Experimental results show that the recognition accuracy of the proposed algorithm reaches 95.21%, indicating that it has good generalization ability and can effectively realize load identification.
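The upsampling that compensates for a low sampling rate can be illustrated with plain linear interpolation along the time dimension. A minimal sketch only: the paper's upsampling network is learned, and linear interpolation is a stand-in for it.

```python
def upsample_linear(signal, factor):
    """Insert `factor - 1` linearly interpolated points between consecutive
    samples, expanding the signal along the time dimension."""
    out = []
    for a, b in zip(signal, signal[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(signal[-1])  # keep the final original sample
    return out
```

The expanded signal has `(n - 1) * factor + 1` samples for an input of length `n`.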

Table and Figures | Reference | Related Articles | Metrics
Data center server energy consumption optimization algorithm combining XGBoost and Multi-GRU
Mingyao SHEN, Meng HAN, Shiyu DU, Rui SUN, Chunyan ZHANG
Journal of Computer Applications    2022, 42 (1): 198-208.   DOI: 10.11772/j.issn.1001-9081.2021071291

With the rapid development of cloud computing technology, the number of data centers has increased significantly, and the resulting energy consumption problem has gradually become one of the research hotspots. Aiming at the problem of server energy consumption optimization, a data center server energy consumption optimization algorithm combining eXtreme Gradient Boosting (XGBoost) and Multi-Gated Recurrent Unit (Multi-GRU), named ECOXG, was proposed. Firstly, data such as the resource occupation information and the energy consumption of each component of the servers were collected with Linux terminal monitoring commands and power consumption meters, and the data were preprocessed to obtain the resource utilization rates. Secondly, the resource utilization rates were concatenated into a time series in vector form to train the Multi-GRU load prediction model, and simulated frequency reduction was applied to the servers according to the prediction results to obtain the load data after frequency reduction. Thirdly, the resource utilization rates of the servers were combined with the energy consumption data at the same moments to train the XGBoost energy consumption prediction model. Finally, the load data after frequency reduction were input into the trained XGBoost model to predict the energy consumption of the servers after frequency reduction. Experiments on the actual resource utilization data of 6 physical servers showed that the Root Mean Square Error (RMSE) of the ECOXG algorithm was 50.9%, 31.0%, 32.7% and 22.9% lower than those of the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) network, CNN-GRU and CNN-LSTM models, respectively; meanwhile, compared with the LSTM, CNN-GRU and CNN-LSTM models, the ECOXG algorithm saved 43.2%, 47.1% and 59.9% of the training time, respectively.
Experimental results show that the ECOXG algorithm can provide a basis for server energy consumption prediction and optimization, and that it is significantly better than the comparison algorithms in accuracy and operating efficiency. In addition, the predicted power consumption of the servers after the simulated frequency reduction is significantly lower than the real power consumption, and the energy-saving effect is outstanding when the utilization rates of the servers are low.
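The RMSE figures reported above follow the standard definition of Root Mean Square Error; a minimal sketch:

```python
import math

def rmse(predicted, actual):
    """Root Mean Square Error between predicted and measured energy values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))
```

Lower RMSE means the predicted server energy consumption tracks the measured consumption more closely.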

Table and Figures | Reference | Related Articles | Metrics
Entity association query system based on enterprise knowledge graph construction
YU Dunhui, WAN Peng, WANG She
Journal of Computer Applications    2021, 41 (9): 2510-2516.   DOI: 10.11772/j.issn.1001-9081.2020111768
Concerning the problems of low semantic relevance between nodes and low query efficiency in current knowledge graph queries, an entity association query method was proposed, and a knowledge graph based enterprise query system was then designed and implemented based on it. The method adopts a four-layer filtering model. Firstly, the common paths of the target node were found through path search, so that query nodes with a low degree of relevance were filtered out and a filtering set was obtained. Then, in the middle two layers, the relevance degrees of the attributes and relationships of the filtering set were calculated, after which graph set filtering was performed based on a dynamic threshold. Finally, the entity relevance and relationship relevance scores were integrated and sorted to obtain the final query result. Experimental results on real enterprise data show that, compared with traditional graph query algorithms such as Ness and NeMa, the proposed method reduces the query time by an average of 28.5% and increases the filtering performance by an average of 29.6%, verifying that the algorithm can efficiently query and display the entities associated with the target.
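The dynamic-threshold filtering idea (keep only candidates whose relevance score reaches some fraction of the current best) can be sketched generically. The scoring function and the `ratio` parameter below are assumptions for illustration, not the paper's four-layer model.

```python
def dynamic_threshold_filter(candidates, score, ratio=0.8):
    """Keep candidates whose score reaches `ratio` of the current best score,
    a stand-in for one dynamic-threshold filtering layer."""
    scores = {c: score(c) for c in candidates}
    best = max(scores.values())
    return [c for c in candidates if scores[c] >= ratio * best]
```

Stacking such layers with different score functions (path commonality, attribute relevance, relationship relevance) gives the flavor of the four-layer pipeline.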
Reference | Related Articles | Metrics
User recommendation method of cross-platform based on knowledge graph and restart random walk
YU Dunhui, ZHANG Luyi, ZHANG Xiaoxiao, MAO Liang
Journal of Computer Applications    2021, 41 (7): 1871-1877.   DOI: 10.11772/j.issn.1001-9081.2020111745
Aiming at the problems that recommending similar users on a single social network platform yields homogeneous results and reflects an insufficient understanding of user interests and behavior information, a User Recommendation method of Cross-Platform based on Knowledge graph and Restart random walk (URCP-KR) was proposed. First, in the similar subgraphs obtained by segmenting and matching the target platform graph and the auxiliary platform graph, an improved multi-layer Recurrent Neural Network (RNN) was used to predict the candidate user entities, and similar users were selected by jointly using the similarity of topological structure features and user portrait similarity. Then, the relationship information of similar users in the auxiliary platform graph was used to complete the target platform graph. Finally, the probabilities of users in the target platform graph walking to each user in the community were calculated to obtain the interest similarity between users and realize user recommendation. Experimental results show that the proposed method achieves higher recommendation precision and diversity than the Collaborative Filtering (CF) algorithm, the User Recommendation algorithm based on Cross-Platform online social networks (URCP) and the User Recommendation algorithm based on Multi-developer Community (UR-MC), with a recommendation precision up to 95.31% and a recommendation coverage up to 88.42%.
Reference | Related Articles | Metrics
Video recommendation algorithm based on danmaku sentiment analysis and topic model
ZHU Simiao, Wei Shiwei, WEI Siheng, YU Dunhui
Journal of Computer Applications    2021, 41 (10): 2813-2819.   DOI: 10.11772/j.issn.1001-9081.2020121997
A large number of user-created videos on the Internet lack user ratings, so their recommendation accuracy is low. To solve this problem, a Video Recommendation algorithm based on Danmaku Sentiment Analysis and topic model (VRDSA) was proposed. Firstly, sentiment analysis was performed on the danmaku comments of the videos to obtain their sentiment vectors, which were used to calculate the emotional similarities between videos; at the same time, a topic model was built on the video tags to obtain their topic distributions, which were used to calculate the topic similarities between videos. Secondly, the emotional similarities and topic similarities were merged into comprehensive similarities between videos. Thirdly, the comprehensive similarities were combined with the user's history records to obtain the user's preference for videos; meanwhile, the public recognition of each video was quantified with user interaction metrics such as the numbers of likes, danmakus and collections, and the comprehensive recognition of each video was calculated by combining the user's history records. Finally, based on the user preferences and the comprehensive video recognitions, the user's recognition of each video was predicted, and a personalized recommendation list was generated to complete the video recommendation. Experimental results show that, compared with the Danmaku video Recommendation algorithm combining Collaborative Filtering and Topic model (DRCFT) and Unifying LDA (Latent Dirichlet Allocation) and Ratings Collaborative Filtering (ULR-itemCF), the proposed algorithm increases precision by 17.1% on average, recall by 22.9% on average, and F1 score by 22.2% on average.
The proposed algorithm recommends videos by analyzing the sentiment of danmakus and integrating a topic model, and it fully exploits the emotionality of danmaku data to make the recommendation results more accurate.
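The fusion of sentiment similarity and topic similarity can be sketched as a weighted sum of two cosine similarities. The fusion weight `alpha` is an assumption; the paper's exact merging formula may differ.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def combined_similarity(sent_u, sent_v, topic_u, topic_v, alpha=0.5):
    """Fuse danmaku-sentiment similarity and tag-topic similarity."""
    return alpha * cosine(sent_u, sent_v) + (1 - alpha) * cosine(topic_u, topic_v)
```

Raising `alpha` weights the emotional signal from danmakus more heavily relative to the tag topics.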
Reference | Related Articles | Metrics
Spatial crowdsourcing task allocation algorithm for global optimization
NIE Xichan, ZHANG Yang, YU Dunhui, ZHANG Xingsheng
Journal of Computer Applications    2020, 40 (7): 1950-1958.   DOI: 10.11772/j.issn.1001-9081.2019112025
Concerning the fact that research on spatial crowdsourcing task allocation seldom considers the benefits of all participants or the global optimization of continuous task allocation, which leads to poor allocation results, an online task allocation algorithm for the global optimization of the tripartite comprehensive benefit was proposed. Firstly, the distribution of crowdsourcing objects (crowdsourcing tasks and workers) in the next time stamp was predicted based on an online random forest and a gated recurrent unit network. Then, a bipartite graph model was constructed according to the situation of the crowdsourcing objects in the current time stamp. Finally, the optimal matching algorithm for weighted bipartite graphs was used to complete the task allocation. Experimental results show that the proposed algorithm realizes the global optimization of continuous task allocation. Compared with the greedy algorithm, it improves the success rate of task allocation by 25.7%, the average comprehensive benefit by 32.2% and the average opportunity cost of workers by 37.8%; compared with the random threshold algorithm, it improves the success rate of task allocation by 27.4%, the average comprehensive benefit by 34.7% and the average opportunity cost of workers by 40.2%.
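The per-time-stamp step reduces to maximum-weight bipartite matching between tasks and workers. For tiny illustrative instances a brute-force search over assignments suffices; a real system would use an optimal matching algorithm such as Kuhn-Munkres, as the abstract indicates.

```python
from itertools import permutations

def best_assignment(weights):
    """weights[i][j]: benefit of assigning task i to worker j (square matrix).
    Brute-force maximum-weight perfect matching for small instances."""
    n = len(weights)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(weights[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm
```

`best_perm[i]` is the worker chosen for task `i`; the brute force is factorial-time, so it only illustrates the objective being optimized.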
Reference | Related Articles | Metrics
Spatiotemporal crowdsourcing online task allocation algorithm based on dynamic threshold
YU Dunhui, YUAN Xu, ZHANG Wanshan, WANG Chenxu
Journal of Computer Applications    2020, 40 (3): 658-664.   DOI: 10.11772/j.issn.1001-9081.2019071282
In order to improve the total utility of dynamic task allocation in spatiotemporal crowdsourcing, a Dynamic Threshold algorithm based on online Random Forest (DTRF) was proposed. Firstly, the online random forest was initialized with the historical matching data of workers and tasks on the crowdsourcing platform. Then, the online random forest was used to predict the expected task return rate of each worker as that worker's threshold, and a candidate matching set was selected for each worker according to the threshold. Finally, the matching with the highest sum of current utility was selected from the candidate matching set, and the online random forest was updated based on the allocation result. Experimental results show that the algorithm increases the average income of workers while improving the total utility. Compared with the greedy algorithm, it increases the task assignment rate by 4.1%, the total utility by 18.2%, and the average worker income by 11.2%; compared with the random threshold algorithm, it also improves the task allocation rate, total utility and average worker income, with better stability.
Reference | Related Articles | Metrics
Time utility balanced online task assignment algorithm under spatial crowdsourcing environment
ZHANG Xingsheng, YU Dunhui, ZHANG Wanshan, WANG Chenxu
Journal of Computer Applications    2019, 39 (5): 1357-1363.   DOI: 10.11772/j.issn.1001-9081.2018092027
Focusing on the poor overall allocation effect caused by considering either the total utility of task allocation or the task waiting time in isolation in studies of task allocation under the spatial crowdsourcing environment, a dynamic threshold algorithm based on an allocation time factor was proposed. Firstly, the allocation time factor of each task was calculated from its estimated waiting time and the time it had already waited. Secondly, the task allocation order was obtained by comprehensively considering the return value of the task and its allocation time factor. Thirdly, a dynamic adjustment term was added to an initial value to set a threshold for each task. Finally, a candidate matching set was built for each task according to the threshold condition, and the candidate matching pair with the largest matching coefficient was selected from the set and added to the result set, completing the task allocation. At a task allocation rate of 95.8%, the proposed algorithm increased the total allocation utility by 20.4% compared with the greedy algorithm; compared with the random threshold algorithm, it increased the total allocation utility by 17.8% and decreased the average task waiting time by 13.2%; compared with the Two phase based Global Online Allocation-Greedy (TGOA-Greedy) algorithm, it increased the total allocation utility by 13.9%. The experimental results show that the proposed algorithm can shorten the average waiting time of tasks while improving the total utility of task allocation, achieving a balance between the two.
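One plausible reading of the allocation time factor is the ratio of time already waited to estimated waiting time, so that tasks nearing their expected limit are prioritized. Both the factor's form and the ordering rule below are hypothetical reconstructions, not the paper's formulas.

```python
def allocation_time_factor(waited, estimated):
    """Hypothetical form: grows toward 1 as a task approaches its
    estimated waiting time."""
    return min(waited / estimated, 1.0)

def allocation_order(tasks):
    """tasks: list of (name, return_value, waited, estimated).
    Order by return value weighted by the time factor, descending."""
    return sorted(tasks,
                  key=lambda t: t[1] * allocation_time_factor(t[2], t[3]),
                  reverse=True)
```

Under this rule a low-value task that has waited a long time can overtake a high-value task that just arrived, which is the balancing behavior the abstract describes.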
Reference | Related Articles | Metrics
Software crowdsourcing worker selection mechanism based on active time grouping
ZHOU Zhuang, YU Dunhui, ZHANG Wanshan, WANG Yi
Journal of Computer Applications    2019, 39 (2): 528-533.   DOI: 10.11772/j.issn.1001-9081.2018061309
Concerning the problem that existing software crowdsourcing worker selection mechanisms do not consider the collaboration among workers, a crowdsourcing worker selection mechanism with a bidding model based on active-time grouping was proposed. Firstly, crowdsourcing workers were divided into multiple collaborative working groups based on their active time. Then, the weight of each working group was calculated according to the development capabilities of the workers in the group and their collaboration factors. Finally, the collaborative working group with the highest weight was selected as the optimal working group, and the most suitable worker in this group was selected for each task module according to the complexity of the module. The experimental results show that, compared with the ability-only allocation method, the proposed mechanism lowers the average worker ability by only 0.57%; at the same time, because collaboration between workers is guaranteed, it reduces the project risk by an average of 32%, so it can effectively guide worker selection for multi-person collaborative crowdsourcing software tasks.
Reference | Related Articles | Metrics
Priority calculation method of software crowdsourcing task release
ZHAO Kunsong, YU Dunhui, ZHANG Wanshan
Journal of Computer Applications    2018, 38 (7): 2032-2036.   DOI: 10.11772/j.issn.1001-9081.2018010001
Aiming at the problem that existing software crowdsourcing platforms do not consider the order of task release, a method of calculating the Task Release Priority (TRP) of software crowdsourcing tasks based on task publisher weight and task weight was proposed. Firstly, a time weight function based on a semi-sinusoidal curve was used to measure the activity of the task publisher and the cumulative turnover of the publisher's tasks, so as to calculate the task publisher weight. Secondly, the task complexity was calculated from the system architecture diagram and the data flow diagram, measuring module complexity, design complexity and data complexity, while the task benefit factor and the task urgency factor were calculated from the task quotation and the task duration; in this way, the task weight was obtained. Finally, the task release priority was determined from the task publisher weight and the task weight. The experimental results show that the proposed algorithm is effective and reasonable, achieving a maximum success rate of 98%.
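A semi-sinusoidal time weight can be sketched as a quarter-cosine that decays from 1 for recent activity to 0 at some horizon. The horizon length and the exact curve are assumptions for illustration; the paper's function may be parameterized differently.

```python
import math

def time_weight(age_days, horizon_days=180.0):
    """Quarter-cosine decay: activity from today weighs 1.0, weight falls
    smoothly to 0 at the horizon, and older activity counts for nothing."""
    if age_days >= horizon_days:
        return 0.0
    return math.cos(math.pi / 2 * age_days / horizon_days)
```

Summing `time_weight` over a publisher's recent tasks gives a recency-sensitive activity score of the kind the abstract describes.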
Reference | Related Articles | Metrics
Ability dynamic measurement algorithm for software crowdsourcing workers
YU Dunhui, WANG Yi, ZHANG Wanshan
Journal of Computer Applications    2018, 38 (12): 3612-3617.   DOI: 10.11772/j.issn.1001-9081.2018040900
Existing software crowdsourcing platforms do not adequately consider the abilities of workers, which leads to low completion quality of the tasks assigned to them. To solve this problem, a new Ability Dynamic Measurement algorithm (ADM) for software crowdsourcing workers was proposed to measure workers' abilities dynamically. Firstly, the initial ability of a worker was calculated based on the worker's static skill coverage rate. Secondly, for each task the worker completed in the past, the task complexity, task completion quality and task development timeliness were integrated to calculate the development ability, and the decay of this ability over time was modeled with a time factor. Then, the ability measurement value was updated dynamically according to the time sequence of all historically completed tasks. Finally, the worker's development ability for a task to be assigned was calculated based on the skill coverage rates of the historical tasks. The experimental results show that, compared with the user reliability measurement algorithm, the proposed algorithm has better rationality and effectiveness, with an average coincidence degree of ability measurement up to 90.5%, and can effectively guide task assignment.
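The time-decayed ability update can be sketched with an exponential time factor; the half-life form below is an assumption, since the abstract does not specify the decay function.

```python
def decayed_ability(ability, days_since, half_life=90.0):
    """Exponential stand-in for the time factor: an ability score measured
    `days_since` days ago counts for less today."""
    return ability * 0.5 ** (days_since / half_life)

def dynamic_ability(history):
    """history: list of (ability_score, days_since_completion).
    Time-weighted average over all historically completed tasks."""
    weights = [0.5 ** (d / 90.0) for _, d in history]
    return sum(a * w for (a, _), w in zip(history, weights)) / sum(weights)
```

Recent tasks dominate the measurement, so a worker whose quality has dropped is re-scored quickly, which is the point of a dynamic measure.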
Reference | Related Articles | Metrics
Personalized recommendation algorithm based on location bitcode tree
LIANG Junjie, GAN Wenting, YU Dunhui
Journal of Computer Applications    2016, 36 (2): 419-423.   DOI: 10.11772/j.issn.1001-9081.2016.02.0419
Since collaborative filtering recommendation algorithms are inefficient in large-data environments, a personalized recommendation algorithm based on a location bitcode tree, called LB-Tree, was developed. Combining the characteristics of the MapReduce framework, a novel approach that applies an index structure to personalized recommendation processing was proposed. For efficient parallel computing in MapReduce, a storage strategy based on the differences between clusters was presented: according to its distribution, each cluster was partitioned into several layers by concentric circles with the same centroid, and each layer was expressed by binary bitcodes of different lengths. To shorten the search paths of frequently recommended data and quickly determine the search space with the index structure, an index tree was constructed from the bitcodes of all the layers. Compared with the Top-N recommendation algorithm and the Similarity-Based Neighborhood Method (SBNM), LB-Tree achieves the highest accuracy with the slowest growth in running time, which verifies its effectiveness and efficiency.
Reference | Related Articles | Metrics
Remote sensing image enhancement based on combination of non-subsampled shearlet transform and guided filtering
LYU Duliang, JIA Zhenhong, YANG Jie, Nikola KASABOV
Journal of Computer Applications    2016, 36 (10): 2880-2884.   DOI: 10.11772/j.issn.1001-9081.2016.10.2880
Aiming at the problems of low contrast, lack of detail and weak retention of edge gradients in remote sensing images, a new remote sensing image enhancement method combining Non-Subsampled Shearlet Transform (NSST) and guided filtering was proposed. Firstly, the input image was decomposed into a low-frequency component and several high-frequency components by NSST. Then, a linear stretch was applied to the low-frequency component to improve the overall contrast of the image, and an adaptive threshold method was used to suppress the noise in the high-frequency components. After denoising, the high-frequency components were enhanced by guided filtering to improve the detail information and the retention of edge gradients. Finally, the enhanced image was reconstructed by applying the inverse NSST to the processed low-frequency and high-frequency components. Experimental results show that, compared with Histogram Equalization (HE), image enhancement based on contourlet transform and fuzzy theory, remote sensing image enhancement based on non-subsampled contourlet transform and unsharp masking, and remote sensing image enhancement based on non-subsampled shearlet transform and parameterized logarithmic image processing, the proposed method effectively increases the information entropy, the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measurement (SSIM), obviously improving the visual effect of the image and making its texture clearer.
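The linear stretch applied to the low-frequency component maps its value range onto the full display range to raise overall contrast. A minimal sketch on a 1-D signal (a real image would be stretched per pixel in the same way):

```python
def linear_stretch(values, lo=0.0, hi=255.0):
    """Linearly map min(values)..max(values) onto lo..hi."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:            # flat input: nothing to stretch
        return [lo for _ in values]
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]
```

After the stretch, the darkest coefficient sits at `lo` and the brightest at `hi`, maximizing the usable contrast range.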
Reference | Related Articles | Metrics
Web text clustering method based on topic
ZHANG Wanshan Xiaoyao LIANG Junjie YU Dunhui
Journal of Computer Applications    2014, 34 (11): 3144-3146.   DOI: 10.11772/j.issn.1001-9081.2014.11.3144

Concerning that traditional Web text clustering algorithms do not consider the topic information of Web texts, which leads to a low accuracy rate in multi-topic Web text clustering, a new Web text clustering algorithm based on topic was proposed. In this method, multi-topic Web texts were clustered in three steps: topic extraction, feature extraction and text clustering. Compared with traditional Web text clustering algorithms, the proposed method fully considers the topic information of Web texts. The experimental results show that the accuracy rate of the proposed algorithm on multi-topic Web text clustering is higher than that of text clustering methods based on K-means or HowNet.

Reference | Related Articles | Metrics
Personalization recommendation algorithm for Web resources based on ontology
LIANG Junjie, LIU Qiongni, YU Dunhui
Journal of Computer Applications    2014, 34 (11): 3135-3139.   DOI: 10.11772/j.issn.1001-9081.2014.11.3135

To improve the accuracy of recommended Web resources, a personalized recommendation algorithm based on ontology, named BO-RM, was proposed. Subject extraction and similarity measurement methods were designed, and ontology semantics were used to cluster Web resources. With a user's browsing tracks captured, the preference tendency and the recommendations were adjusted dynamically. Comparison experiments were conducted with a situation-based collaborative filtering algorithm named CFR-RM and a model-based personalized prediction algorithm. The results show that BO-RM has relatively stable time overhead and performs well in Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP). The results prove that BO-RM is practical for large collections of Web resources because it improves efficiency through offline data analysis; in addition, it captures users' interests in real time and updates the recommendation list dynamically, which meets the real needs of users.

Reference | Related Articles | Metrics
Common virtual wind speed sensors for wind farm based on finite impulse response neural network
SU Yong-xin, LUO Pei-yu, DUAN Bin
Journal of Computer Applications    2012, 32 (05): 1446-1449.  
Wind speed sensors on wind turbines have a high fault rate, and their faults can expose the turbines to safety risks and cause energy production losses. However, many current methods of improving the reliability of wind speed information face the challenges of high cost or large error. A virtual wind speed sensor based on spatial correlation was presented in this paper. Its key characteristic is that it generates a downwind turbine's logical wind speed from a specific upwind turbine's real wind speed, and the calculation of the logical wind speed needs only the outputs of the existing wind speed and wind direction sensors. A FIR (Finite Impulse Response) neural network computing model was proposed to deal with the complexity of calculating the logical wind speed. Moreover, the key technologies for building the virtual wind speed sensor system were discussed. The logical wind speed generated by the virtual sensor can provide reliable wind speed input to a turbine controller, and the presented approach is applicable to wind turbines of any type in wind farms.
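The temporal memory that distinguishes a FIR neural network from an ordinary one comes from replacing each scalar synapse with a tapped delay line, i.e. a finite impulse response filter over recent inputs. A minimal sketch of that underlying operation (samples before t=0 are taken as zero):

```python
def fir_filter(x, coeffs):
    """y[n] = sum_k coeffs[k] * x[n - k]: the tapped-delay-line computation
    embedded in each synapse of a FIR neural network."""
    y = []
    for n in range(len(x)):
        y.append(sum(c * x[n - k] for k, c in enumerate(coeffs) if n - k >= 0))
    return y
```

In the full model the filter coefficients are learned per synapse, so each neuron responds to a short history of upwind measurements rather than a single instant.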
Reference | Related Articles | Metrics